71 research outputs found

    FPGA-Based PUF Designs: A Comprehensive Review and Comparative Analysis

    Field-programmable gate arrays (FPGAs) have established themselves as versatile platforms for implementing physical unclonable functions (PUFs). Their intrinsic reconfigurability and strong relevance to hardware security make them a valuable asset in this area. This study surveys FPGA-based PUF designs, offering a comprehensive overview together with a comparative analysis. PUFs underpin device authentication, key generation, and the fortification of secure cryptographic protocols, and FPGA technology broadens the scope of PUF integration across diverse hardware systems. We first set out the fundamental ideas behind PUFs and their importance to current security paradigms. Different FPGA-based PUF solutions, including static, dynamic, and hybrid designs, are then examined closely; each design paradigm is analyzed to reveal its distinctive properties, functional nuances, and weaknesses. We assess a range of performance metrics, including distinctiveness (uniqueness), reliability, and resilience against hostile threats, and compare FPGA-based PUF systems against one another to expose their respective advantages and disadvantages. This study provides system designers and security professionals with the information they need to choose the most suitable PUF design for their particular applications. The paper offers a comprehensive view of the functionality, security capabilities, and prospective applications of FPGA-based PUF systems, and the insight gained advances the field of hardware security, enabling security practitioners, researchers, and designers to make informed decisions when selecting and implementing FPGA-based PUF solutions.
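
    As a rough illustration of the distinctiveness (uniqueness) and reliability metrics referred to above, the following Python sketch computes the standard inter-chip and intra-chip Hamming-distance figures from a matrix of PUF responses. The toy data, array shapes, and function names are assumptions for illustration, not the evaluation code of any reviewed design.

        import numpy as np

        def uniqueness(responses: np.ndarray) -> float:
            """Average pairwise inter-chip Hamming distance in percent (ideal: 50%).

            responses: (n_devices, n_bits) binary array, one response per device
            for the same challenge.
            """
            n, bits = responses.shape
            total, pairs = 0.0, 0
            for i in range(n):
                for j in range(i + 1, n):
                    total += np.count_nonzero(responses[i] != responses[j]) / bits
                    pairs += 1
            return 100.0 * total / pairs

        def reliability(reference: np.ndarray, repeats: np.ndarray) -> float:
            """100% minus the average intra-chip Hamming distance (ideal: 100%).

            reference: (n_bits,) response at nominal conditions.
            repeats:   (n_trials, n_bits) responses re-measured under varying
                       temperature or supply voltage.
            """
            hd = np.mean([np.count_nonzero(reference != r) / reference.size for r in repeats])
            return 100.0 * (1.0 - hd)

        # Toy data: 8 devices x 64-bit responses, plus 10 noisy re-reads of device 0.
        rng = np.random.default_rng(0)
        resp = rng.integers(0, 2, size=(8, 64))
        noisy = np.where(rng.random((10, 64)) < 0.03, 1 - resp[0], resp[0])
        print(f"uniqueness  = {uniqueness(resp):.1f}%")
        print(f"reliability = {reliability(resp[0], noisy):.1f}%")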

    Hand Gesture Classification Using Grayscale Thermal Images and Convolutional Neural Network

    Accepted manuscript.

    Single-channel speech enhancement using implicit Wiener filter for high-quality speech communication

    Speech enables easy human-to-human communication as well as human-to-machine interaction. However, the quality of speech degrades due to background noise in the environment, such as drone noise embedded in speech during search and rescue operations. Similarly, helicopter noise, airplane noise, and station noise reduce the quality of speech. Speech enhancement algorithms reduce background noise, resulting in a crystal-clear and noise-free conversation. For many applications, it is also necessary to process these noisy speech signals at the edge node level. Thus, we propose an implicit Wiener filter-based algorithm for speech enhancement on an edge computing system. In the proposed algorithm, a first-order recursive equation is used to estimate the noise. The performance of the proposed algorithm is evaluated for two speech utterances, one uttered by a male speaker and the other by a female speaker. Both utterances are degraded by different types of non-stationary noise (exhibition, station, drone, helicopter, and airplane noise) as well as stationary white Gaussian noise, at different signal-to-noise ratios. Further, we compare the performance of the proposed speech enhancement algorithm with the conventional spectral subtraction algorithm. Performance evaluations using objective speech quality measures demonstrate that the proposed speech enhancement algorithm outperforms the spectral subtraction algorithm in estimating the clean speech from the noisy speech. Finally, we implement the proposed speech enhancement algorithm, in addition to the spectral subtraction algorithm, on the Raspberry Pi 4 Model B, a low-power edge computing device.
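
    As a loose illustration of the kind of processing described above (not the paper's exact implicit Wiener formulation), the following Python sketch applies a Wiener-style spectral gain with a first-order recursive noise-PSD estimate. The STFT parameters, smoothing constant, spectral floor, and the assumption that the first few frames are noise-only are all illustrative choices.

        import numpy as np
        from scipy.signal import stft, istft

        def enhance(noisy: np.ndarray, fs: int, alpha: float = 0.98, floor: float = 0.05):
            """Simplified single-channel Wiener-style enhancement with a
            first-order recursive noise-PSD estimate."""
            f, t, Y = stft(noisy, fs=fs, nperseg=512, noverlap=384)
            power = np.abs(Y) ** 2

            # Initialize the noise PSD from the first few frames (assumed noise-only).
            noise_psd = power[:, :5].mean(axis=1)

            gains = np.empty_like(power)
            for k in range(power.shape[1]):
                # First-order recursive update: slow tracking of the noise floor,
                # clamped so strong speech frames do not inflate the estimate.
                noise_psd = alpha * noise_psd + (1.0 - alpha) * np.minimum(power[:, k], 4.0 * noise_psd)
                snr = np.maximum(power[:, k] / np.maximum(noise_psd, 1e-12) - 1.0, 0.0)
                gains[:, k] = np.maximum(snr / (snr + 1.0), floor)  # Wiener gain with a floor

            _, clean = istft(gains * Y, fs=fs, nperseg=512, noverlap=384)
            return clean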

    Detection of Depression Using Weighted Spectral Graph Clustering With EEG Biomarkers

    The alarming annual growth in the number of people affected by Major Depressive Disorder (MDD) is a global problem. Electroencephalography (EEG) is one of the analytical tools available for the primary screening of depression. Machine Learning (ML) and Deep Neural Network (DNN) methods are the most common techniques for MDD diagnosis using EEG. However, these supervised methods rely heavily on manually annotated EEG signals, which can only be produced by experts, for training; they also demand considerable memory and time and require large amounts of data to uncover emerging tendencies or previously unseen patterns. In light of these difficulties, this article develops an unsupervised learning method for identifying MDD. From the preprocessed EEG, three quantitative biomarkers (beta, delta, and theta band power) and three signal features (Detrended Fluctuation Analysis (DFA), Higuchi’s Fractal Dimension (HFD), and Lempel-Ziv Complexity (LZC)) are extracted. An undirected graph is then created with the EEG channels as nodes and the extracted features as weights along the edges, and subjects are assigned to one of the two classes (MDD or normal) by spectral clustering. An accuracy of 98% with a 2.5% classification error is achieved for the left hemisphere, while an accuracy of 97% with a 3.3% classification error percentage (CEP) is achieved for the right hemisphere. The FP1 and F8 channels achieve the highest classification accuracy.
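
    For readers unfamiliar with the clustering step, the sketch below shows weighted-graph spectral clustering in Python on a precomputed affinity matrix. The feature values, the Gaussian edge weighting, and the node count are illustrative assumptions; the actual biomarker extraction (band power, DFA, HFD, LZC) is not reproduced.

        import numpy as np
        from sklearn.cluster import SpectralClustering

        # Hypothetical feature matrix: one row per graph node, columns are the six
        # features named above (beta/delta/theta band power, DFA, HFD, LZC).
        features = np.random.default_rng(1).random((19, 6))  # toy values

        # Weighted undirected graph: edge weight = Gaussian similarity of feature vectors.
        d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
        affinity = np.exp(-d2 / d2.mean())

        # Two-way partition of the graph (e.g. MDD-like vs. normal-like).
        labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                    random_state=0).fit_predict(affinity)
        print(labels)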

    Recent Advances in mmWave-Radar-Based Sensing, Its Applications, and Machine Learning Techniques: A Review

    Human gesture detection, obstacle detection, collision avoidance, parking aids, automotive driving, medical, meteorological, industrial, agriculture, defense, space, and other relevant fields have all benefited from recent advancements in mmWave radar sensor technology. A mmWave radar has several advantages that set it apart from other types of sensors: it can operate in bright, dazzling, or no-light conditions, it allows better antenna miniaturization than traditional radars, and it offers better range resolution. As more data sets have become available, there has been a significant increase in the potential for incorporating radar data into machine learning methods for various applications. This review focuses on key performance metrics in mmWave-radar-based sensing, detailed applications, and machine learning techniques used with mmWave radar for a variety of tasks. The article begins with a discussion of the working bands of mmWave radars, then covers the types of mmWave radars and their key specifications, mmWave radar data interpretation, and applications across various domains, and concludes with a discussion of machine learning algorithms applied to radar data. Our review serves as a practical reference for beginners developing mmWave-radar-based applications using machine learning techniques.
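
    As a small worked example of the range-resolution advantage mentioned above, the standard FMCW relation delta_R = c / (2B) can be evaluated directly; the 4 GHz sweep bandwidth below is only an illustrative figure, not one taken from the review.

        # Range resolution of an FMCW radar: delta_R = c / (2 * B),
        # where B is the swept RF bandwidth.
        C = 3e8            # speed of light, m/s
        bandwidth = 4e9    # 4 GHz sweep, typical of 77-81 GHz automotive radars
        delta_r = C / (2 * bandwidth)
        print(f"range resolution = {delta_r * 100:.2f} cm")  # 3.75 cm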

    Spectrum cartography techniques, challenges, opportunities, and applications: A survey

    Spectrum cartography finds applications in several areas, such as cognitive radios, spectrum-aware communications, machine-type communications, the Internet of Things, connected vehicles, wireless sensor networks, and radio frequency management systems. This paper presents a survey of state-of-the-art spectrum cartography techniques for the construction of various radio environment maps (REMs). Following a brief overview of spectrum cartography, the techniques used to construct REMs such as the channel gain map, power spectral density map, power map, spectrum map, power propagation map, radio frequency map, and interference map are reviewed. We compare the performance of the different spectrum cartography methods in terms of mean absolute error, mean square error, normalized mean square error, and root mean square error. The information presented in this paper aims to serve as a practical reference guide to spectrum cartography methods for constructing different REMs. Finally, some of the open issues and challenges for future research and development are discussed.
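
    The four comparison metrics named above have their usual definitions; a minimal Python sketch for computing them over a reconstructed radio environment map might look as follows. The toy map size and the normalization of the NMSE by the ground-truth energy are assumptions.

        import numpy as np

        def map_errors(true_map: np.ndarray, estimated_map: np.ndarray) -> dict:
            """MAE, MSE, NMSE, and RMSE of an estimated REM against ground truth."""
            err = estimated_map - true_map
            mae = np.mean(np.abs(err))
            mse = np.mean(err ** 2)
            nmse = mse / np.mean(true_map ** 2)  # normalization choice is an assumption
            rmse = np.sqrt(mse)
            return {"MAE": mae, "MSE": mse, "NMSE": nmse, "RMSE": rmse}

        # Toy example: a 50x50 power map estimate with small Gaussian error.
        rng = np.random.default_rng(0)
        truth = rng.random((50, 50))
        estimate = truth + 0.05 * rng.standard_normal((50, 50))
        print(map_errors(truth, estimate))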

    Multi-Channel Time-Frequency Domain Deep CNN Approach for Machinery Fault Recognition Using Multi-Sensor Time-Series

    In industry, machinery failure causes catastrophic accidents and destructive damage to machines. It brings machinery to a halt and reduces production, causing financial losses. As a result, identifying machine faults at an early stage is critical. With the rapid advancement of artificial-intelligence-based methods, developing automated systems that can diagnose machinery faults is both necessary and challenging. This paper proposes a multi-channel time-frequency domain deep convolutional neural network (CNN)-based approach for machinery fault diagnosis using multivariate time-series data from multiple sensors (tachometer, microphone, underhang bearing accelerometer, and overhang bearing accelerometer). A wavelet synchrosqueezed transform (WSST)-based technique is used to compute time-frequency images from the multivariate time-series data. The time-frequency images are fed into the multi-channel deep CNN model for automated fault detection. The proposed model is multi-headed, with one head per channel, so that the time-frequency domain information of each sensor's time series is considered for automated fault detection. The proposed model's performance is compared to benchmark models in terms of testing accuracy, total parameters, and model size, and experiments show that it outperforms the benchmark models in classification accuracy. The proposed multi-channel CNN model achieves an accuracy of 99.48% and an F1-score of 99% for fault classification using time-frequency images of multi-sensor data. Finally, the model's inference time is measured when deployed on edge computing devices such as the Raspberry Pi and the Nvidia Jetson AGX Xavier.
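
    As a rough sketch of what a multi-headed CNN over per-sensor time-frequency images can look like (written in PyTorch with illustrative layer sizes rather than the paper's exact architecture), consider the following; the number of sensors, image size, and class count are assumptions.

        import torch
        import torch.nn as nn

        class MultiChannelCNN(nn.Module):
            """One small convolutional branch per sensor's time-frequency image,
            fused before a shared classifier."""
            def __init__(self, n_sensors: int = 4, n_classes: int = 4):
                super().__init__()
                self.heads = nn.ModuleList([
                    nn.Sequential(
                        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                        nn.MaxPool2d(2),
                        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d((4, 4)),
                    )
                    for _ in range(n_sensors)
                ])
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(n_sensors * 32 * 4 * 4, 128), nn.ReLU(),
                    nn.Linear(128, n_classes),
                )

            def forward(self, images):
                # images: list of n_sensors tensors, each (batch, 1, H, W) --
                # one time-frequency image per sensor channel.
                fused = torch.cat([head(x) for head, x in zip(self.heads, images)], dim=1)
                return self.classifier(fused)

        # Toy forward pass: four 64x64 time-frequency images per sample.
        model = MultiChannelCNN()
        batch = [torch.randn(2, 1, 64, 64) for _ in range(4)]
        print(model(batch).shape)  # torch.Size([2, 4])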

    Efficient Hardware Architectures for Accelerating Deep Neural Networks: Survey

    In the modern era of technology, a paradigm shift has been witnessed in areas involving applications of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL). Specifically, Deep Neural Networks (DNNs) have emerged as a popular field of interest in most AI applications such as computer vision, image and video processing, and robotics. In the context of mature digital technologies and the availability of authentic data and data-handling infrastructure, DNNs have become a credible choice for solving complex real-life problems. In certain situations, the performance and accuracy of a DNN even exceed those of human intelligence. However, DNNs are computationally demanding in terms of both resources and time, and general-purpose architectures such as CPUs struggle to handle such computationally intensive algorithms. Therefore, the research community has invested considerable interest and effort in specialized hardware architectures such as the Graphics Processing Unit (GPU), Field Programmable Gate Array (FPGA), Application Specific Integrated Circuit (ASIC), and Coarse Grained Reconfigurable Array (CGRA) for the effective implementation of such algorithms. This paper reviews the research carried out on the development and deployment of DNNs using these specialized hardware architectures and embedded AI accelerators. The review describes in detail the specialized hardware accelerators used in the training and/or inference of DNNs. A comparative study of the discussed accelerators, based on factors such as power, area, and throughput, is also presented. Finally, future research and development directions, such as trends in DNN implementation on specialized hardware accelerators, are discussed. This review article is intended to serve as a guide to hardware architectures for accelerating and improving the effectiveness of deep learning research.

    Updating Thermal Imaging Dataset of Hand Gestures with Unique Labels

    An update to the previously published low-resolution thermal imaging dataset is presented in this paper. The new dataset contains high-resolution thermal images corresponding to various hand gestures captured using the FLIR Lepton 3.5 thermal camera and the Purethermal 2 breakout board. The camera has a calibrated array of 19,200 pixels. The images captured by the thermal camera are independent of lighting. The dataset consists of 14,400 images, split equally between color and grayscale, and covers 10 different hand gestures. Each gesture has 24 images per person, with 30 persons contributing to the whole dataset. The dataset also contains images captured with different hand orientations and under different lighting conditions.